
    A Highly Available Cluster of Web Servers with Increased Storage Capacity

    Proceedings of the Seventeenth Jornadas de Paralelismo, held at the Universidad de Castilla-La Mancha in Albacete on 18-20 September 2006. Web server scalability has traditionally been addressed by improving the software elements or increasing the hardware resources of the server machine. Another approach has been the use of distributed architectures. In such architectures, the file allocation strategy has usually been either full replication or full distribution. In previous work we have shown that partial replication offers a good balance between storage capacity and reliability: it provides much higher storage capacity, while reliability can be kept at a level equivalent to that of fully replicated solutions. In this paper we present the architectural details of Web cluster solutions adapted to partial replication. We also show that partial replication does not imply a performance penalty with respect to classical fully replicated architectures. For evaluation purposes we use a simulation model built on the OMNeT++ framework, with mean service time as the performance comparison metric.
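    A minimal sketch (our own illustration, not the paper's implementation) of the partial-replication idea: each file is placed on a fixed number of servers, so storage capacity grows with the cluster while every file still has several copies. The `replicas` parameter and the hashing scheme are assumptions made only for this example.

```python
import hashlib

def assign_servers(file_id: str, n_servers: int, replicas: int) -> list[int]:
    """Map a file to `replicas` distinct servers (partial replication).

    replicas = n_servers -> full replication
    replicas = 1         -> full distribution
    Intermediate values trade storage capacity against reliability.
    """
    # Hash the file identifier to pick a deterministic starting server,
    # then place the copies on consecutive servers in a ring.
    start = int(hashlib.md5(file_id.encode()).hexdigest(), 16) % n_servers
    return [(start + i) % n_servers for i in range(replicas)]

# Example: an 8-server cluster keeping 3 copies of each file.
print(assign_servers("index.html", n_servers=8, replicas=3))
```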

    Neutrino interaction classification with a convolutional neural network in the DUNE far detector

    Document written by a large number of authors; only the first author and those affiliated with UC3M are referenced. The Deep Underground Neutrino Experiment is a next-generation neutrino oscillation experiment that aims to measure CP violation in the neutrino sector as part of a wider physics program. A deep learning approach based on a convolutional neural network has been developed to provide highly efficient and pure selections of electron neutrino and muon neutrino charged-current interactions. The electron neutrino (antineutrino) selection efficiency peaks at 90% (94%) and exceeds 85% (90%) for reconstructed neutrino energies between 2 and 5 GeV. The muon neutrino (antineutrino) event selection is found to have a maximum efficiency of 96% (97%) and exceeds 90% (95%) efficiency for reconstructed neutrino energies above 2 GeV. When considering all electron neutrino and antineutrino interactions as signal, a selection purity of 90% is achieved. These event selections are critical to maximize the sensitivity of the experiment to CP-violating effects. This document was prepared by the DUNE Collaboration using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. This work was supported by CNPq, FAPERJ, FAPEG and FAPESP, Brazil; CFI, Institute of Particle Physics and NSERC, Canada; CERN; MŠMT, Czech Republic; ERDF, H2020-EU and MSCA, European Union; CNRS/IN2P3 and CEA, France; INFN, Italy; FCT, Portugal; NRF, South Korea; Comunidad de Madrid, Fundación "La Caixa" and MICINN, Spain; State Secretariat for Education, Research and Innovation and SNSF, Switzerland; TÜBITAK, Turkey; The Royal Society and UKRI/STFC, United Kingdom; DOE and NSF, United States of America.
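    As a minimal illustration (not the DUNE analysis code) of the quoted metrics, a selection defined by a cut on a classifier score can be characterized by its efficiency (selected signal over all signal) and purity (selected signal over all selected events); the 0.5 threshold and the toy data below are hypothetical values chosen only for this sketch.

```python
def selection_metrics(scores, is_signal, threshold=0.5):
    """Efficiency and purity of a cut-based event selection.

    scores    : classifier output per event (higher = more signal-like)
    is_signal : True for signal events (e.g. nu_e CC), False otherwise
    """
    selected = [s >= threshold for s in scores]
    n_signal = sum(is_signal)
    n_selected = sum(selected)
    n_true_selected = sum(sel and sig for sel, sig in zip(selected, is_signal))
    efficiency = n_true_selected / n_signal if n_signal else 0.0
    purity = n_true_selected / n_selected if n_selected else 0.0
    return efficiency, purity

# Toy example: four events with classifier scores and truth labels.
eff, pur = selection_metrics([0.9, 0.8, 0.3, 0.7], [True, True, False, False])
print(f"efficiency={eff:.2f} purity={pur:.2f}")
```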

    A heterogeneous mobile cloud computing model for hybrid clouds

    Mobile cloud computing is a paradigm that delivers applications to mobile devices by using cloud computing. In this way, mobile cloud computing enables a rich user experience: since client applications run remotely on the cloud infrastructure, they consume fewer resources on the user's mobile device. In this paper, we present a new mobile cloud computing model, inspired by both the volunteer computing and mobile edge computing paradigms, in which platforms of volunteer devices provide part of the resources of the cloud. These platforms may be hierarchical, based on the capabilities of the volunteer devices and the requirements of the services provided by the clouds. We also describe the orchestration between the volunteer platform and public, private or hybrid clouds. As we show, this new model can be an inexpensive solution for different application scenarios, with benefits in cost savings, elasticity, scalability, load balancing, and efficiency. Moreover, the evaluation performed shows that the proposed model is a feasible solution for cloud services with a large number of mobile users. (C) 2018 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under project grant TIN2016-79637-P, TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
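    A minimal sketch, under our own assumptions, of how an orchestrator might tier volunteer devices by capability and fall back to the cloud; the capability rule (enough CPU and more than 30% battery) and all names below are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    cpu_ghz: float
    battery_pct: int

def dispatch(task_cpu_ghz: float, volunteers: list[Device]) -> str:
    """Prefer a capable volunteer device; otherwise offload to the cloud."""
    capable = [d for d in volunteers
               if d.cpu_ghz >= task_cpu_ghz and d.battery_pct > 30]
    if capable:
        # Pick the most capable volunteer in the top tier.
        return max(capable, key=lambda d: d.cpu_ghz).name
    return "public-cloud"   # hypothetical cloud fallback

volunteers = [Device("phone-a", 1.8, 80), Device("tablet-b", 2.4, 20)]
print(dispatch(2.0, volunteers))   # no suitable volunteer -> "public-cloud"
```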

    ComBoS: a complete simulator of volunteer computing and desktop grids

    Volunteer computing is a type of distributed computing in which ordinary people donate their idle computer time to science projects such as SETI@Home, Climateprediction.net and many others. In a similar way, desktop grid computing is a form of distributed computing in which an organization uses its existing computers to handle its own long-running computational tasks. BOINC is the main middleware platform for volunteer computing and desktop grid computing, and it has become a general platform for distributed applications in areas as diverse as mathematics, medicine, molecular biology, climatology, environmental science, and astrophysics. In this paper we present a complete simulator of BOINC infrastructures, called ComBoS. Although there are other BOINC simulators, none of them allows us to simulate the complete infrastructure of BOINC. Our goal was to create a simulator that, unlike the existing ones, could simulate realistic scenarios taking into account the whole BOINC infrastructure: projects, servers, network, redundant computing, scheduling, and volunteer nodes. The outputs of the simulations allow us to analyze a wide range of statistical results, such as the throughput of each project, the number of jobs executed by the clients, the total credit granted, and the average occupation of the BOINC servers. The paper describes the design of ComBoS and the results of the validation performed. This validation compares the results obtained with ComBoS against the real ones of three different BOINC projects (Einstein@Home, SETI@Home and LHC@Home). We also analyze the performance of the simulator in terms of memory usage and execution time. The paper also shows that our simulator can guide the design of BOINC projects, describing some case studies using ComBoS that could help designers verify the feasibility of BOINC projects. (C) 2017 Elsevier B.V. All rights reserved. This work has been partially supported by the Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD under project grant TIN2016-79637-P, TOWARDS UNIFICATION OF HPC AND BIG DATA PARADIGMS.
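    A minimal sketch, not the ComBoS interface, of the kind of aggregate statistics the abstract mentions (per-project throughput, jobs per client, total credit), computed here from hypothetical per-job records that a simulation run might emit.

```python
from collections import defaultdict

# Hypothetical per-job records from a simulated run.
jobs = [
    {"project": "Einstein@Home", "client": "c1", "flops": 4.2e12, "credit": 120},
    {"project": "SETI@Home",     "client": "c2", "flops": 3.1e12, "credit": 90},
    {"project": "Einstein@Home", "client": "c2", "flops": 4.0e12, "credit": 115},
]
simulated_seconds = 3600.0

throughput = defaultdict(float)     # FLOPS delivered per project
jobs_per_client = defaultdict(int)  # jobs executed by each client
total_credit = 0
for job in jobs:
    throughput[job["project"]] += job["flops"] / simulated_seconds
    jobs_per_client[job["client"]] += 1
    total_credit += job["credit"]

print(dict(throughput), dict(jobs_per_client), total_credit)
```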

    Enhancing the power of two choices load balancing algorithm using round robin policy

    This paper proposes a new version of the power of two choices, SQ(d), load balancing algorithm. The new algorithm improves the performance of the classical model based on power-of-two-choices randomized load balancing. This model considers jobs that arrive at a dispatcher as a Poisson stream of rate λn, with λ < 1, for a set of n servers. Using the power of two choices, the dispatcher chooses a constant number d of servers for each job, independently and uniformly at random from the n servers, and sends the job to the server with the fewest jobs. This algorithm offers an advantage over load balancing based on the shortest-queue discipline, because it provides good performance while reducing the overhead in the servers and the communication network. In this paper, we propose a new version, shortest queue of d with randomization and round-robin policies, SQ-RR(d). This new algorithm combines randomization techniques with static load balancing based on a round-robin policy. In this new version, the dispatcher chooses the d servers as follows: one is selected using a round-robin policy, and the other d−1 servers are chosen independently and uniformly at random from the n servers. Then, the dispatcher sends the job to the server with the fewest jobs. We show with a theoretical approximation that this new version improves the performance obtained with the classical solution in all situations, including systems at 99% capacity. Furthermore, we provide simulations that corroborate the theoretical approximation. This work was partially supported by the project "CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones" (S2018/TCS-4423) from the Madrid Regional Government.
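    A minimal sketch of the two dispatch policies described above; the queue-length bookkeeping and class layout are our own assumptions for illustration, not the paper's implementation.

```python
import random

class Dispatcher:
    def __init__(self, n_servers: int, d: int = 2):
        self.queues = [0] * n_servers   # outstanding jobs per server
        self.d = d
        self.rr_next = 0                # round-robin pointer for SQ-RR(d)

    def sq_d(self) -> int:
        """Classical SQ(d): probe d servers chosen uniformly at random."""
        candidates = random.sample(range(len(self.queues)), self.d)
        return min(candidates, key=lambda i: self.queues[i])

    def sq_rr_d(self) -> int:
        """SQ-RR(d): one candidate from the round-robin pointer,
        the other d-1 chosen uniformly at random."""
        rr = self.rr_next
        self.rr_next = (self.rr_next + 1) % len(self.queues)
        candidates = [rr] + random.sample(range(len(self.queues)), self.d - 1)
        return min(candidates, key=lambda i: self.queues[i])

    def dispatch(self, use_rr: bool = True) -> int:
        server = self.sq_rr_d() if use_rr else self.sq_d()
        self.queues[server] += 1   # job goes to the least-loaded candidate
        return server

disp = Dispatcher(n_servers=8, d=2)
print([disp.dispatch() for _ in range(5)])
```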

    A new volunteer computing model for data-intensive applications

    Volunteer computing is a type of distributed computing in which ordinary people donate computing resources to scientific projects. BOINC is the main middleware system for this type of distributed computing. The aim of volunteer computing is to let organizations attain large computing power through the participation of volunteer clients, instead of a high investment in infrastructure. In some projects, such as ATLAS@Home, the number of running jobs has reached a plateau due to the high load that file transfers place on the data servers. For this reason, we have designed an alternative, using the same BOINC infrastructure, to improve the performance of BOINC projects that have reached their limit because of the I/O bottleneck in the data servers. In this alternative, a percentage of the volunteer clients run as data servers, called data volunteers, which improve the performance of the system by reducing the load on the data servers. In addition, our solution takes advantage of data locality, leveraging the low network latencies of closer machines. This paper describes our alternative in detail and shows the performance of the solution, applied to 3 different BOINC projects, using a simulator of our own, ComBoS. Spanish MINISTERIO DE ECONOMÍA Y COMPETITIVIDAD, Grant/Award Number: TIN2016-79637-
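    A minimal sketch, under our own assumptions, of the data-locality idea: a client fetches an input file from the closest data volunteer that holds it, and only falls back to the project's central data server otherwise. The host names, latency figures and file identifiers are hypothetical.

```python
def pick_data_source(file_id, data_volunteers, central_server="project-data-server"):
    """Choose where a client downloads an input file from.

    data_volunteers: list of (host, latency_ms, files_held) tuples.
    Returns the lowest-latency data volunteer holding the file,
    or the central data server if no volunteer has it.
    """
    holders = [(host, latency) for host, latency, files in data_volunteers
               if file_id in files]
    if holders:
        return min(holders, key=lambda h: h[1])[0]
    return central_server

volunteers = [
    ("dv-madrid", 12.0, {"wu_001.in", "wu_002.in"}),
    ("dv-lisbon", 35.0, {"wu_001.in"}),
]
print(pick_data_source("wu_001.in", volunteers))   # -> "dv-madrid"
print(pick_data_source("wu_009.in", volunteers))   # -> "project-data-server"
```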

    A cloudification methodology for multidimensional analysis: Implementation and application to a railway power simulator

    Many scientific areas make extensive use of computer simulations to study complex real-world processes. These computations are typically very resource-intensive and present scalability issues as experiments get larger, even in dedicated clusters, since these are limited by their own hardware resources. Cloud computing arises as an option to move towards ideally unlimited scalability by providing virtually infinite resources, yet applications must be adapted to this new paradigm. The process of converting and/or migrating an application and its data in order to make use of cloud computing is sometimes known as cloudifying the application. We propose a generalist cloudification method based on the MapReduce paradigm to migrate scientific simulations into the cloud and provide greater scalability. We analysed its viability by applying it to a real-world railway power consumption simulator and running the resulting implementation on Hadoop YARN over Amazon EC2. Our tests show that the cloudified application is highly scalable and that there is still a large margin to improve the theoretical model and its implementations, and also to extend it to a wider range of simulations. We also propose and evaluate a multidimensional analysis tool based on the cloudified application. It generates, executes and evaluates several experiments in parallel for the same simulation kernel. The results we obtained indicate that our methodology is suitable for resource-intensive simulations and multidimensional analysis, as it improves infrastructure utilization, efficiency and scalability when running many complex experiments. This work has been partially funded under grant TIN2013-41350-P of the Spanish Ministry of Economy and Competitiveness, and the COST Action IC1305 "Network for Sustainable Ultrascale Computing Platforms" (NESUS).
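    A minimal sketch, not the authors' Hadoop implementation, of the cloudification idea: the simulation kernel is wrapped as a map function applied to independent input chunks, and a reduce step aggregates the partial results. The `simulate_chunk` kernel and its toy formula are hypothetical placeholders.

```python
from functools import reduce
from multiprocessing import Pool

def simulate_chunk(params):
    """Hypothetical simulation kernel: energy consumed (kWh) for one chunk."""
    trains, km = params
    return trains * km * 0.5   # placeholder model, not the real simulator

def combine(total, partial):
    """Reduce step: aggregate partial results into a global figure."""
    return total + partial

if __name__ == "__main__":
    chunks = [(10, 5.0), (8, 7.5), (12, 3.2)]   # independent map inputs
    with Pool() as pool:
        partials = pool.map(simulate_chunk, chunks)      # map phase
    print("total kWh:", reduce(combine, partials, 0.0))  # reduce phase
```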

    Efficient design assessment in the railway electric infrastructure domain using cloud computing

    Nowadays, railway infrastructure designers rely heavily on computer simulators and expert systems to model, analyze and evaluate potential deployments prior to their installation. This paper presents the railway power consumption simulator model (RPCS), a cloud-based model for the design, simulation and evaluation of railway electric infrastructures. This model integrates the parameters of an infrastructure within a search engine that generates and evaluates a set of simulations to achieve optimal designs, according to a given set of objectives and restrictions. The knowledge of the domain is represented as an ontology that translates the elements of the infrastructure into an electric circuit, which is simulated to obtain a wide range of electric metrics. In order to support the execution of thousands of scenarios in a scalable, efficient and fault-tolerant manner, this paper introduces an architecture to deploy the model in a cloud environment, and a dimensioning model to find the types and number of instances that maximize performance while minimizing the externalization costs. The resulting model is applied to a particular case study, allowing the execution of over one thousand concurrent experiments in a virtual cluster on the Amazon Elastic Compute Cloud. This work has been partially funded under grant TIN2013-41350-P of the Spanish Ministry of Economy and Competitiveness, and the COST Action IC1305 "Network for Sustainable Ultrascale Computing Platforms" (NESUS).
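    A minimal sketch, under assumed prices and throughput figures, of the dimensioning question the abstract describes: choosing an instance type and count that meet a target simulation rate at minimum hourly cost. The instance catalogue below is hypothetical, not Amazon EC2 pricing.

```python
# Hypothetical instance catalogue: (hourly cost in $, simulations/hour each).
catalogue = {
    "small":  (0.10, 20),
    "medium": (0.20, 45),
    "large":  (0.40, 100),
}

def dimension(required_sims_per_hour: float, max_instances: int = 64):
    """Return the (type, count, cost) meeting the required rate at minimum cost."""
    best = None
    for name, (cost, rate) in catalogue.items():
        for count in range(1, max_instances + 1):
            if count * rate >= required_sims_per_hour:
                total_cost = count * cost
                if best is None or total_cost < best[2]:
                    best = (name, count, total_cost)
                break   # more instances of this type only cost more
    return best

print(dimension(1000))   # -> ('large', 10, 4.0) with this toy catalogue
```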

    Exposing data locality in HPC-based systems by using the HDFS backend

    This work was partially supported by the project "CABAHLA-CM: Convergencia Big data-Hpc: de los sensores a las Aplicaciones" (S2018/TCS-4423) from the Madrid Regional Government; by the project "New Data Intensive Computing Methods for High-End and Edge Computing Platforms" (DECIDE), Ref. PID2019-107858GB-I00; and by the European Union's Horizon 2020 research and innovation program under grant agreement No 801091, project "ASPIDE: Exascale programming models for extreme data processing".

    WepSIM: an online interactive educational simulator integrating microdesign, microprogramming, and assembly language programming

    Our educational project has three primary goals. First, we want to provide a robust vision of how hardware and software interplay, by integrating the design of an instruction set (through microprogramming) with the use of that instruction set for assembly programming. Second, we wish to offer a versatile and interactive tool in which this integrated vision can be tested. The tool we have developed to achieve this is called WepSIM, and it provides the view of an elemental processor together with a microprogrammed subset of the MIPS instruction set. In addition, WepSIM is flexible enough to be adapted to other instruction sets or hardware components (e.g., ARM or x86). Third, we want to extend the activities of our university courses, labs, and lectures beyond fixed hours in a fixed place, so that students may learn using their mobile devices at any location and at any time of day. This article presents how WepSIM has improved the teaching of Computer Structure courses by empowering students with a more dynamic and guided learning process. We also show the results obtained from using the simulator in the Computer Structure course of the Bachelor's Degree in Computer Science and Engineering at University Carlos III of Madrid.